feat: add reasoning effort configuration to model request #20
xavidop merged 2 commits into genkit-ai:main
Conversation
Summary of Changes

Hello @crazywako, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the Azure AI Foundry plugin by adding a new configuration option, `reasoningEffort`, to model requests.
Code Review
This pull request introduces a reasoningEffort configuration option for model requests, specifically targeting gpt-5 models to control their reasoning level and potentially speed up responses. The implementation correctly adds the new field to the model configuration, extracts it from the request, and applies it when building the chat completion parameters. My feedback includes a suggestion to refactor the logic that maps the string value to the corresponding constant, aiming for better code maintainability.
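The suggested refactor can be sketched as a lookup map from the config string to a typed constant, replacing an if/switch chain. This is a minimal illustration, not the plugin's actual code: the `ReasoningEffort` type and the helper name are hypothetical; only the accepted strings come from this PR.

```go
package main

import "fmt"

// ReasoningEffort is a hypothetical typed alias standing in for the SDK
// constant the plugin maps to; the real plugin type may differ.
type ReasoningEffort string

// reasoningEffortByName maps the config string to its constant. The six
// accepted values are the ones listed in the PR description.
var reasoningEffortByName = map[string]ReasoningEffort{
	"none":    "none",
	"minimal": "minimal",
	"low":     "low",
	"medium":  "medium",
	"high":    "high",
	"xhigh":   "xhigh",
}

// lookupReasoningEffort resolves a config string, returning an error for
// values outside the accepted set instead of silently ignoring them.
func lookupReasoningEffort(name string) (ReasoningEffort, error) {
	effort, ok := reasoningEffortByName[name]
	if !ok {
		return "", fmt.Errorf("unknown reasoningEffort %q", name)
	}
	return effort, nil
}

func main() {
	effort, err := lookupReasoningEffort("minimal")
	fmt.Println(effort, err)
}
```

Adding a new effort level then becomes a one-line map entry rather than another branch.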
@crazywako can you please apply the Gemini code assist suggestions?
xavidop left a comment:

apply the Gemini code assist improvements
I applied the Gemini suggestions and made a commit.
## [1.2.0](v1.1.6...v1.2.0) (2026-02-11)

### 🚀 Features

* add reasoning effort configuration to model request ([#20](#20)) ([53af95f](53af95f))
* force release ([758a741](758a741))

### 🐛 Bug Fixes

* added licenses ([b1b411c](b1b411c))

### ⚙️ Continuous Integration

* **deps:** bump actions/checkout from 4 to 6 ([#17](#17)) ([f7fb6d6](f7fb6d6))
🎉 This PR is included in version 1.2.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
Added a reasoning effort configuration to speed up the output of gpt-5 models. All gpt-5 models are reasoning models, which makes them slow by default, even gpt-5-nano.
After this change you can set the reasoning effort in the request via the `reasoningEffort` config key to one of "none", "minimal", "low", "medium", "high", or "xhigh":
```go
ai.WithConfig(map[string]interface{}{
	"reasoningEffort": "minimal",
})
```
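On the plugin side, the review describes extracting this option from the request config before building the chat completion parameters. A self-contained sketch of that extraction step, under the assumption that the config arrives as a `map[string]interface{}` (the helper name is ours; only the `reasoningEffort` key comes from this PR):

```go
package main

import "fmt"

// extractReasoningEffort pulls the "reasoningEffort" option out of a
// request config map. It reports false when the key is absent or the
// value is not a string, so the caller can fall back to the model's
// default behavior.
func extractReasoningEffort(config map[string]interface{}) (string, bool) {
	raw, ok := config["reasoningEffort"]
	if !ok {
		return "", false // option not set; keep the model's default
	}
	effort, ok := raw.(string)
	return effort, ok // ok is false if the value is not a string
}

func main() {
	cfg := map[string]interface{}{"reasoningEffort": "minimal"}
	if effort, ok := extractReasoningEffort(cfg); ok {
		fmt.Println("reasoning effort:", effort)
	}
}
```

Because the value comes in as `interface{}`, the type assertion guards against non-string config values instead of panicking.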